Patent abstract:
A plant treatment platform uses a plant detection model to detect plants as the plant treatment platform crosses a field. The plant treatment platform receives image data from a camera that captures images of plants (for example, crops or weeds) growing in the field. The plant treatment platform applies pre-processing functions to the image data to prepare it for processing by the plant detection model; for example, the platform can reformat the image data, adjust the resolution or aspect ratio, or crop the image data. The plant treatment platform applies the plant detection model to the pre-processed image data to generate bounding boxes for plants. The plant treatment platform can then apply treatment to the plants based on the output of the machine learning model.
Publication number: BR112019023576A2
Application number: R112019023576-0
Filing date: 2018-05-09
Publication date: 2020-06-02
Inventors: Kamp Redden Lee; Grant Padwick Christopher; Radhakrishnan Rajesh; Patrick Ostrowski James
Applicant: Blue River Technology Inc.
IPC main class:
Patent description:

METHOD AND COMPUTER-READABLE MEDIUM
Background
[0001] Conventional crop treatment systems usually apply treatments to all plants in a field or to entire plant zones within a field. For example, a plant treatment system can use a sprayer that uniformly treats all plants in a field or zone with the same treatment, without considering each plant individually. These systems have significant disadvantages. A major disadvantage in the case of a spray-type treatment is that the treatment fluid is usually applied liberally throughout the zone or field, which results in significant waste. Particularly for fertilizer treatments, the overuse of a nitrogen-containing fertilizer is harmful to the environment as a whole. In addition, in these systems, crops and weeds alike receive fertilizers or other beneficial treatments, unless there is a separate effort to remove weeds before treatment. This manual effort is expensive and time-consuming, and does not necessarily result in the removal of all weeds.
[0002] To obtain precise application of treatments to plants, farmers can apply the treatments manually. However, these methods are exceptionally laborious and therefore expensive, especially for any form of modern agriculture carried out at scale.
Summary
[0003] A mobile treatment platform uses a plant detection model to detect and identify plants in image data captured by a camera on the mobile treatment platform as the platform travels through a crop field. Specifically, the model is able to distinguish between crops and weeds in general, and is also able to distinguish between several varieties of plants and several varieties of weeds. The mobile treatment platform receives image data from the camera and applies pre-processing functions to the image data. The pre-processing functions can serve any of several purposes, such as adapting the image data to be consistent with the image data used to train the plant detection model and/or improving the specificity, sensitivity, and/or end-to-end efficiency of the plant detection model. Examples of pre-processing that can be performed include, but are not limited to, debayering (application of the Bayer algorithm), cropping, white balancing, scaling, exposure control, and normalization of values.
[0004] In one embodiment, the plant detection model is configured to generate bounding boxes that delimit the parts of the image data that the model identifies as representing different plants. The model associates each bounding box with a predicted plant species (also referred to here as the plant type) and a confidence measure.
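For illustration only, the exact output format is not specified by this disclosure; a minimal sketch of one possible per-detection record (all field names here are hypothetical) is:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    # Pixel coordinates of the box center and its size, matching the
    # (center x, center y, width, height) description in this disclosure.
    x_center: float
    y_center: float
    width: float
    height: float
    plant_type: str    # e.g., "crop" or "weed", or a predicted species label
    confidence: float  # model confidence in [0, 1]
```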
[0005] The plant detection model can also be used to detect areas where the mobile treatment platform has already treated the plants. For example, if the mobile treatment platform applies a liquid treatment to plants, the plant detection model can generate bounding boxes that delimit parts of the image data representing areas that have already been treated by the mobile treatment platform and/or areas that have not yet been treated.
[0006] The plant detection model can be generated using a variety of machine learning tools, including, but not limited to, neural networks, support vector machines, and decision trees. In one embodiment, the plant detection model is a modified version of the Single Shot MultiBox Detector (SSD) neural network. In this embodiment, the plant detection model is modified from the baseline SSD neural network to improve the specificity, sensitivity, and/or efficiency of the plant detection model. In various implementations, the modified SSD model can include, among others, any of the following processing techniques: batch normalization, leaky rectified linear units, residual neural networks, custom anchor boxes, clean labeled data, increased spatial resolution in feature maps, spatial transformers, training loss optimization, weighted softmax, feature map fusion, background mining, training augmentations, and uncertainty-based retraining.
[0007] The mobile treatment platform uses the output of the plant detection model to control the treatment of plants in the field as the mobile treatment platform crosses the field. For example, the mobile treatment platform can use the bounding box locations that identify individual plants in the image data to direct the application of a fertilizer to those plants. Likewise, the mobile treatment platform can use the bounding box locations that identify individual weeds in the image data to apply a herbicide to those weeds. To achieve this plant-level specificity, the mobile treatment platform may include appropriate onboard hardware to store treatment materials and apply them at a plant-specific level of granularity.
[0008] The mobile treatment platform, including the plant detection model, allows rapid determination of the location of individual plants in order to carry out appropriate treatments of individual plants. This allows the mobile treatment platform to localize treatment to individual plants, rather than treating the entire field, without requiring excessive manual labor and while avoiding excessive use of treatment materials.
Brief description of the drawings
[0009] FIG. 1 illustrates a system architecture for an exemplary mobile treatment platform, according to an embodiment.
[0010] FIG. 2 illustrates a crop of original image data, according to an embodiment.
[0011] FIG. 3 illustrates the structure of an example Single Shot MultiBox Detector model, according to an embodiment.
[0012] FIG. 4 illustrates two graphs from the original SSD publication that show the accuracy of the baseline SSD300 and SSD500 models for extra-small, small, medium, large, and extra-large objects.
[0013] FIGS. 5 and 6 illustrate two graphs of the performance of a modified SSD model for different training augmentation parameters, according to some embodiments.
[0014] FIGS. 7A and 7B illustrate improvements of a plant detection model over the baseline SSD model in generating bounding boxes, according to one embodiment.
[0015] FIGS. 7C and 7D illustrate improvements of a plant detection model over the baseline SSD model in generating bounding boxes, according to another embodiment.
[0016] FIGS. 7E and 7F illustrate improvements of a plant detection model over the baseline SSD model in generating bounding boxes, in accordance with yet another embodiment.
[0017] FIGS. 8A, 8B and 8C illustrate an example implementation of a plant detection model that identifies bounding boxes for treated soil patches on dark soil, on light soil, and with light leakage under a cover used to normalize the light in each case, according to an embodiment.
[0018] FIG. 9 is a flow chart illustrating an exemplary method for identifying bounding boxes for plants, according to an embodiment.
[0019] FIG. 10 is a flow chart illustrating an exemplary method for identifying bounding boxes for treated areas, according to an embodiment.
Detailed Description
I. Mobile Treatment Platform
[0020] FIG. 1 illustrates a system architecture for an exemplary mobile treatment platform 100, according to some embodiments. The mobile treatment platform 100 includes a treatment mechanism 110, a camera 120, a transport mechanism 130 configured to move the entire platform 100 across the field, and a computer 140. Although only single instances of these elements are shown and described, in practice the platform 100 can contain more than one of each of these elements.
[0021] Treatment mechanism 110 applies treatment to plants within a field while the mobile treatment platform 100 traverses that field. For example, treatment mechanism 110 may include one or more sprayers and one or more containers that retain treatment fluids to be applied to plants through treatment mechanism 110. Fluid treatments may include, but are not limited to, fertilizers, herbicides, pesticides, fungicides, insecticides and growth regulators. Treatment mechanism 110 may also include scissors or other cutting mechanisms for pruning, high pressure water jets for pruning or removing crops or weeds, and electrodes for applying an electrical discharge to crops and weeds.
[0022] Treatment mechanism 110 receives treatment instructions from computer 140 and treats plants based on the treatment instructions. Treatment instructions can come in several forms and can include timing instructions to activate and deactivate treatment mechanism 110. For example, treatment mechanism 110 can activate a specific sprayer upon receiving an activation signal from computer 140 and can deactivate the sprayer upon receiving a deactivation signal. As another example, treatment mechanism 110 can receive activation and deactivation times, and treatment mechanism 110 can use an internal clock or other timing mechanism to determine when to activate and deactivate based on the times received.
[0023] Treatment instructions can also include directional instructions that specify a direction for treatment mechanism 110. For example, directional instructions can be specified in Cartesian, polar, or other coordinates. Directional instructions can be encoded to indicate a placement/orientation of the treatment mechanism 110, or can be encoded to indicate where a treatment should be applied, with treatment mechanism 110 configured to translate the instructions to determine how the treatment mechanism 110 will be repositioned/reoriented to carry out the treatment. Accordingly, the treatment mechanism 110 can be suitably configured to translate, rotate, or otherwise be manipulated to effect treatment at the location specified by the directional instructions. For a sprayer, this may include rotating the sprayer so that the spray reaches an area determined by the treatment instructions.
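Purely as an illustrative sketch of the Cartesian-to-polar translation mentioned above (the actual sprayer geometry and units are not specified here, so all names are hypothetical):

```python
import math

def sprayer_pan_angle(dx_m: float, dy_m: float) -> float:
    """Convert a Cartesian target offset (meters ahead of the sprayer,
    meters across the row) into a polar pan angle in degrees for a
    rotating sprayer. Geometry and units are assumptions."""
    return math.degrees(math.atan2(dy_m, dx_m))
```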
[0024] Camera 120 is physically positioned on platform 100 to capture image data of plants within the field and, based on its positioning, generally also captures part of the soil in which the plants are planted. Camera 120 captures image data while the platform physically moves across the field. The image data comprises still images; however, in practice, images can be captured at a sufficient rate that they can be used and processed as video data, if desired. The capture rate can vary: camera 120 can capture image data at a specific rate, based on time or after platform 100 has traveled a fixed physical distance. Alternatively, camera 120 can capture a new image/set of image data each time a new plant enters the camera's field of view.
[0025] A wide variety of cameras, with different capabilities and captured light spectra, can be used. Examples include, but are not limited to, RGB cameras, near-infrared cameras (for example, red edge or short-wave infrared), ultraviolet cameras, and multispectral cameras. Cameras generally use CMOS digital image sensors, but they can also use CCD image sensors. More than one camera 120 can be used, such as a first camera located ahead of the treatment mechanism 110 along the direction of travel and a second camera located behind the treatment mechanism 110 along the direction of travel of the transport mechanism 130. The camera can capture image data from a top-down perspective, a side perspective, or an angled perspective.
[0026] The transport mechanism 130 moves the mobile treatment platform 100 across the field. The transport mechanism 130 may be a wheeled motor vehicle. Alternatively, the transport mechanism 130 may include a hitch that allows the mobile treatment platform 100 to be attached to a separate vehicle and towed across the field. The transport mechanism 130 may further include an odometer that allows the mobile treatment platform 100 to determine the distance traveled by the mobile treatment platform 100.
[0027] Computer 140 provides computational resources for the mobile treatment platform 100. Computer 140 may comprise a processing unit (for example, one or more of a CPU, GPU, or FPGA) and a data storage medium (for example, static or dynamic memory). In one embodiment, computer 140 comprises a deep learning GPU that is configured to effectively execute a deep learning neural network. For example, computer 140 may include an NVIDIA GeForce® GTX™ TITAN X card using the Caffe deep learning framework, or an NVIDIA TX1 or TX2 using the TensorFlow deep learning framework.
[0028] Computer 140 may also include communication elements, such as buses, input/output terminals, and other computer hardware sufficient to communicate with and control the operation of one or more treatment mechanisms. More specifically, the image data to be processed into treatment instructions can be transmitted to computer 140 for processing using any type of transmission protocol. For example, the OSI (Open Systems Interconnection) model can be used to send image data from camera 120 to computer 140 using Ethernet connections between these components. The instructions generated by computer 140 can then be transmitted to treatment mechanisms 110 using Ethernet connections, controller area network (CAN) bus connections, or another transmission protocol.
[0029] Computer 140 stores software instructions that describe a series of logical components that determine the performance of the tasks that platform 100 is capable of performing. Examples of such tasks include, but are not limited to, capturing image data, processing image data and other data to generate a treatment instruction, and using the plant detection model output to control treatment mechanism 110 with the treatment instructions. Despite the name plant detection module, this module handles both crop/weed detection and sprayer localization, as described herein. In one embodiment, these logic components include a plant detection module 150, which includes a pre-processing module 160, a training module 170, a plant detection model 180, and a treatment application module 190. However, additional modules, or a smaller or different set of modules, can be included in other embodiments.
[0030] In a specific embodiment, the plant detection module 150 communicates with a remotely located computer server (not shown) that is not physically present on the platform computer 140, and some or all of the features of the plant detection module 150 are performed by the server. In such an embodiment, the image data and any other relevant data, such as the rate and direction of travel of the transport mechanism 130, are relayed to the remote server, and the treatment instructions are relayed back to the computer 140 for execution by the treatment mechanism 110.
[0031] The plant detection module 150 uses a trained detection model 180 to detect and classify plants or parts of plants, bounding boxes, and spray patterns, and can also be configured to detect other items present in the images, such as plant debris or accumulations of dirt. The plant detection module 150 is described below.
II. Detection of Plants and Spray Patterns
II.A. Pre-processing of image data
[0032] The pre-processing module 160 pre-processes the image data received from the camera 120. The pre-processing module 160 can pre-process training image data in preparation for training the plant detection model 180, as well as image data collected in the field for actual use of the plant detection model 180. Listed below are some examples of pre-processing steps that the pre-processing module 160 can apply to image data. In various embodiments, any one or more of these techniques can be used to pre-process image data for training or model use.
[0033] Debayering: The pre-processing module 160 can apply a Bayer algorithm (demosaicing) to the image data using pixel values received directly from the camera's image sensor. The pre-processing module 160 can use any of a variety of techniques to perform debayering, examples of which include, but are not limited to, nearest-neighbor, linear interpolation, cubic interpolation, high-quality linear interpolation, and smooth hue transition interpolation. In some embodiments, the pre-processing module 160 debayers the image data by passing the raw A2D values in the image data through an embedded FPGA that returns a 24-bit color-corrected RGB value from the 10-bit A2D values. The pre-processing module 160 can perform debayering of the image data by means of pixel binning, which can be any of analog binning, interpolation binning, or binning after analog-to-digital conversion. Alternatively, pixel binning can be performed on camera 120 before pixel values are output from camera 120.
[0034] Cropping: The pre-processing module 160 can crop image data to control the sizes of the images being processed by the plant detection model 180. The pre-processing module 160 can crop the image data to remove parts of the image data that are probably not related to a particular task. For example, if the platform is traveling along a row of crops (for example, between or over the crops), the pre-processing module 160 can crop captured image data beyond a threshold distance outside the rows and/or beyond a threshold distance from areas of expected growth of relevant crops or weeds. The underlying assumption is that the cropped-out portions are likely to be irrelevant to crop and weed detection. In some embodiments, the pre-processing module 160 crops the image data by cropping sensor data from the camera sensors before
the sensor data are converted to image data.
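As an illustrative sketch of the row-based cropping described above (the disclosure does not specify an implementation; the row location and margin names here are hypothetical):

```python
import numpy as np

def crop_to_row(image: np.ndarray, row_center_px: int, margin_px: int) -> np.ndarray:
    """Keep only a horizontal band around a crop row.

    Assumes a top-down image in which the row runs along the image's
    x-axis at pixel row `row_center_px`; `margin_px` is the threshold
    distance beyond which pixels are assumed irrelevant to detection.
    """
    top = max(0, row_center_px - margin_px)
    bottom = min(image.shape[0], row_center_px + margin_px)
    return image[top:bottom, :, :]
```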
[0035] FIG. 2 illustrates a crop 200 of original image data 210 that reduces the amount of image data processed by the plant detection model 180 while, at the same time, ensuring that the entire plant 220 is captured, according to an embodiment. FIG. 2 illustrates the successive capture of several images 230 as the platform passes along the row. While FIG. 2 illustrates an example capture volume for an exemplary camera (not shown), other implementations in which the camera and the platform pass along the row are also possible. The pre-processing module 160 can crop the image data to ensure that no blind spots occur in a capture area. The pre-processing module 160, and the physical platform 100 more generally, can do this taking into account a variety of fixed constraints or variable parameters, examples of which include, but are not limited to, an expected crop height or height range (for example, 0 to 12 inches high), a fixed or controllable camera position (for example, height above the ground, geometry, and orientation relative to the ground plane), a variable rate of travel (for example, a range of 0 to 6 miles per hour or more), and a variable camera shutter speed that can vary with other camera parameters, such as image sensor gain.
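One way to reason about the blind-spot constraint above: the time between captures must not exceed the time the platform takes to traverse the un-overlapped portion of the camera's ground footprint. A minimal sketch under assumed parameter names (none of these are specified by the disclosure):

```python
def min_capture_rate_hz(footprint_along_travel_m: float,
                        overlap_m: float,
                        speed_m_per_s: float) -> float:
    """Minimum frames per second so that consecutive images overlap by
    at least `overlap_m` meters of ground, leaving no blind spots."""
    usable = footprint_along_travel_m - overlap_m
    if usable <= 0:
        raise ValueError("requested overlap exceeds the camera footprint")
    return speed_m_per_s / usable
```

For example, at 6 miles per hour (about 2.68 m/s), a 0.5 m footprint with 0.1 m of required overlap would need roughly 6.7 captures per second.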
[0036] White balance: The pre-processing module 160 can white balance the image data to normalize the colors of the captured image. The pre-processing module 160 can white balance the image data based on one or more of several factors, examples of which include, but are not limited to: the time of day when the image data was obtained, whether artificial lighting was used to capture the image data, and whether a cover was used to block or diffuse sunlight. This ensures consistent white balance for the images, regardless of the circumstances, in order to ensure consistent processing using the plant detection model 180.
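The disclosure does not specify a white-balance algorithm; as a minimal sketch, a simple gray-world correction could look like the following:

```python
import numpy as np

def gray_world_white_balance(image: np.ndarray) -> np.ndarray:
    """Scale each channel so its mean matches the overall mean
    (the gray-world assumption). `image` is HxWx3, uint8 or float."""
    img = image.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gain = channel_means.mean() / channel_means
    balanced = img * gain  # gain broadcasts over the channel axis
    return np.clip(balanced, 0, 255).astype(image.dtype)
```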
[0037] Resizing: The pre-processing module 160 resizes the image data to ensure that each instance (image) of the image data stores a fixed pixel resolution for a given geographic area. For example, the pre-processing module 160 can resize images to ensure a constant pixels-per-inch (PPI) ratio across the images. This may include downsampling or upsampling the image data to decrease or increase the PPI. Resizing can be done for several reasons. For example, image data received from a camera 120 mounted on a platform may not have the same PPI as the image data originally used to train the plant detection model 180. This can occur when the camera 120 configuration on the mobile treatment platform 100 is different from the configuration used to capture the training image data. For example, if the camera can be positioned at one of several different heights on different runs of the platform over the field or over different fields, the pre-processing module 160 can resize the images (along with applying one or more of the other techniques mentioned in this section) to ensure a consistent PPI for images fed into the plant detection model 180 from these separate runs.
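A minimal sketch of PPI normalization, assuming the current PPI is known from the camera height and geometry (function and parameter names are hypothetical):

```python
import cv2

def resize_to_target_ppi(image, current_ppi: float, target_ppi: float):
    """Rescale an image so that one inch of ground spans `target_ppi`
    pixels, given the `current_ppi` implied by the camera configuration."""
    scale = target_ppi / current_ppi
    new_size = (round(image.shape[1] * scale), round(image.shape[0] * scale))
    # INTER_AREA is generally preferred for downsampling, INTER_LINEAR for upsampling.
    interp = cv2.INTER_AREA if scale < 1.0 else cv2.INTER_LINEAR
    return cv2.resize(image, new_size, interpolation=interp)
```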
[0038] Exposure control: The pre-processing module 160 can adjust the exposure of the image data. As with white balance, the pre-processing module 160 adjusts the exposure to ensure, as far as possible, uniformity of exposure across images collected within a given run across a field, as well as between runs. The pre-processing module 160 can adjust camera 120 exposure parameters dynamically as the platform passes through the field, or can adjust the exposure of the image data algorithmically after capturing the image data. Examples of camera parameters that can be adjusted to control exposure include, but are not limited to, shutter speed and gain control on camera 120's image sensor.
[0039] Normalization of values: The pre-processing module 160 can normalize the pixel values of the image data to reduce biases when the images are input into the plant detection model 180 for use or training. For example, the pre-processing module 160 can adjust pixel values (for example, individual RGB values for specific pixels) to obtain zero mean and unit variance, and the values can be normalized to lie within the range [-1, 1], [0, 1], or any other normalization. In some embodiments, if each pixel contains a value for more than one channel (for example, not only monochrome, but RGB, CIE 1931, or HSV), the pre-processing module 160 normalizes the pixel values associated with each channel to obtain zero mean and unit variance.
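A minimal sketch of the per-channel zero-mean, unit-variance normalization just described:

```python
import numpy as np

def normalize_per_channel(image: np.ndarray) -> np.ndarray:
    """Normalize each channel of an HxWxC image to zero mean and
    unit variance, per paragraph [0039]."""
    img = image.astype(np.float32)
    flat = img.reshape(-1, img.shape[-1])
    mean = flat.mean(axis=0)
    std = flat.std(axis=0) + 1e-8  # guard against divide-by-zero
    return (img - mean) / std
```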
II.B. Image data labeled for training
[0040] Training module 170 generates and/or accesses labeled image data, also referred to here as labeled training data, to train the plant detection model 180. Labeled image data includes bounding boxes that delimit the plant boundaries in the image data, along with labels indicating which bounding boxes are associated with crops and which with weeds. The data can also identify the specific plant species associated with each bounding box. Additionally or alternatively, the labeled image data can specify bounding boxes that identify treated parts of the soil, as well as metadata representing which pixels are associated with treated soil and which are not.
[0041] Although the embodiment shown in FIG. 1 illustrates the training module 170 being stored by the computer 140 on the mobile treatment platform 100, in practice the training module 170 can be implemented by a remote computer server (not shown) in communication with the mobile treatment platform 100. In such an embodiment, training module 170 can train the plant detection model 180 on the remote server, and the remote server can transmit the trained plant detection model 180 to the mobile treatment platform 100 to be used in the field.
[0042] There are several possible sources of labeled image data. In one embodiment, training module 170 transmits image data to human labelers, who generate and respond with the labeled image data. In addition, training module 170 generates additional labeled image data to train the plant detection model 180 by dividing labeled image data into multiple images that may or may not overlap each other with respect to the physical region of the field captured in those images. This technique can also be called mosaicing or tiling.
[0043] If a labeled image has a higher resolution (that is, a larger number of pixels) than the resolution of the image data used to train the plant detection model 180, the labeled image can be divided into smaller labeled tile images (mosaics) that are used to train the plant detection model 180. For example, if a labeled image is twice the size of the image data used to train the plant detection model 180, the training module 170 can split the labeled image in half and use each half of the labeled image to train the plant detection model 180. This helps to ensure consistency between the images used to train the model and the images captured by the camera for use in performing the task concerned (for example, identification of bounding boxes or localized spray patterns).
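A minimal sketch of such tiling, including remapping each labeled box into its tile's coordinate frame (the handling of boxes that straddle a tile edge is an assumption; a production system might instead clip or drop them):

```python
def tile_labeled_image(image, boxes, tile_w: int, tile_h: int):
    """Split a labeled image into non-overlapping tiles, remapping each
    bounding box (x, y, w, h, label) into its tile's coordinates.
    A box is assigned to the tile containing its center."""
    tiles = []
    h, w = image.shape[:2]
    for ty in range(0, h - tile_h + 1, tile_h):
        for tx in range(0, w - tile_w + 1, tile_w):
            tile = image[ty:ty + tile_h, tx:tx + tile_w]
            tile_boxes = [(x - tx, y - ty, bw, bh, label)
                          for (x, y, bw, bh, label) in boxes
                          if tx <= x + bw / 2 < tx + tile_w
                          and ty <= y + bh / 2 < ty + tile_h]
            tiles.append((tile, tile_boxes))
    return tiles
```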
II.C. Model Tasks
[0044] As discussed above, the plant detection model 180 (or a similar model, such as a spray box detection model) can be trained to perform one or more tasks using one or more submodels.
[0045] One task is to identify bounding boxes that specify where the plants/crops/weeds/species are physically located in the soil in the field, as represented by the image data, and the types of plants within each bounding box. The output of this task is, for each image, the locations and sizes of the bounding boxes for plants in the images. The output may also include a numerical confidence of the plant detection model 180 in its prediction for the bounding box. Together, as will be described in Section III below, the bounding boxes, and in some implementations the numerical confidences, are used to determine an action taken by platform 100.
[0046] Furthermore, although the present disclosure relates mainly to the detection of plants in image data, the principles of the detection models described here can be adapted to implement other types of detection models for detecting other features in image data as the mobile treatment platform 100 travels across a field. For example, another task is to detect, with a spray box detection model similar to the plant detection model, bounding boxes for soil and/or plants that have already been treated by the treatment mechanism 110. This task is similar to the task of identifying bounding boxes for plants, except that in this task the model is trained on image data labeled with liquid treatment sites instead of image data labeled with plants/crops/weeds/species.
II.D. General model structure
[0047] At its core, the plant detection model 180 is a supervised machine learning model that describes a functional relationship between image data and predictions related to the categorization of the image data into bounding boxes (for tasks related to bounding boxes) or some other scheme, such as plant species. The plant detection model 180 is generally a parameterized model, whereby a set of parameters representing the characteristics learned from the problem space have associated parameter values that are learned through training based on the labeled training data. The exact form of the parameters and values depends on the type of supervised machine learning technique used to implement the model. For example, in the case of a tree-based model, the parameter values can be described as threshold values, while in the case of a neural network model, these parameter values can be referred to as weights and the parameters as features.
[0048] Generally, the model 180 is trained by inputting the labeled training data, including the image data and the labels, into the function (or set of functions) that represents the model. The parameter values are then learned and stored together with the function. Together, the function and parameter values are the digital structural representation of the model. The model is stored in computer 140 memory and can be accessed when used, for example, when platform 100 is driven across the field. During use, new image data is received and input into the model, that is, into the function and associated parameter values, and an output is generated that represents the model 180's predictions regarding bounding box locations.
[0049] Throughout this description, a general set of implementations of the plant detection model 180 is described as a neural network model, for convenience of description and as a prototypical example. However, in practice, a wide variety of different types of supervised machine learning techniques can be used in place of a neural network, examples of which include, among others, tree-based models, support vector machines, and so on.
II.D.1. Modified SSD model
[0050] In one embodiment, the plant detection model 180 is based on the Single Shot MultiBox Detector (SSD) model originally described for object detection, generally used with standard images (hereinafter referred to as the baseline SSD model for convenience). SSD: Single Shot MultiBox Detector, arXiv:1512.02325 (Dec. 29, 2016), available at https://arxiv.org/pdf/1512.02325.pdf. However, the models 180 described in this document implement modifications to the baseline SSD model (the modified model) that improve the performance of the plant detection model 180, particularly the accuracy of detecting relatively small bounding boxes, on which the standard SSD model performs poorly. These modifications improve the detection of the bounding box sizes relevant to images containing plants, which makes the modified SSD model better suited to performing the tasks identified above.
[0051] The modified SSD model comprises a series of convolutional feature layers of decreasing resolution, such that each convolutional feature layer is suited to efficiently identifying sequentially larger objects in the image data. FIG. 3 illustrates the structure of the SSD model, including extra feature layers of decreasing resolution. The modified SSD model generates bounding boxes that identify objects in an input image 300. A bounding box is a set of values that identify a part of the input image 300 (for example, the x and y position of the center of a rectangle, the width of the rectangle, and the height of the rectangle). The modified SSD model uses a set of default bounding boxes as guides to determine bounding boxes for objects. Each feature layer 310 implements a process for generating a set of feature maps 320 for each location in a set of locations within the input of the feature layer (for example, the input image or the feature maps of a previous feature layer). The sets of feature maps 320 generated at each feature layer 310 contain features (for example, values that identify or quantify the existence of a characteristic of an item to be determined or identified) related to each object class that the modified SSD model is trained to detect, to each default bounding box used by the modified SSD model, and to offsets for a bounding box at each location within the input associated with the generated set of feature maps (again, the input image or a feature map of a previous feature layer). Offsets describe how a default bounding box can be translated or stretched to fit a bounding box for an object in the original input image.
[0052] In an SSD model, the feature map 320 output of each feature layer 310 is used as the input to the next feature layer 310, over a sequence of feature layers of decreasing size (for example, sequential feature layers can be smaller than previous layers). As a result, feature layers 310 of progressively smaller size are used to more effectively generate bounding boxes for objects that are progressively larger in the original image, and feature layers 310 of larger size are used to more effectively generate bounding boxes for smaller objects within the original image. The outputs 320 of each individual feature layer 310 (that is, the feature maps) are input into a classifier portion 330 (which can be a convolutional neural network, or CNN), which uses the maps from each feature layer to generate bounding boxes and classify any objects within those feature maps, and therefore classify any object present in the original image.
[0053] While the modified SSD model uses the feature maps of larger feature layers to identify smaller objects in the original image, the modified SSD model is particularly modified to improve the detection of small objects in the original image, because the baseline SSD model has difficulty identifying smaller objects. FIG. 4 illustrates two graphs from the original SSD publication mentioned above that show the accuracy of the SSD300 and SSD500 for extra-small (XS), small (S), medium (M), large (L), and extra-large (XL) objects. The baseline SSD model is insufficient for performing the bounding box tasks described above because it is ineffective at generating bounding boxes for smaller objects. FIG. 4 illustrates that the SSD300 and SSD500 do not accurately detect XS and some S objects and are therefore insufficient for detecting bounding boxes for smaller plants in image data.
II.D.2. SSD model improvements
[0054] The mobile treatment platform 100 can improve the computer processing performance of the plant detection model 180 by pre-processing image data received from the camera, as described above. In addition, the plant detection model 180 can improve on the object detection performance of the standard SSD model (as measured by sensitivity, specificity, precision, or another statistical measure) by incorporating one or more modifications to the standard SSD model. The modified SSD model can include any one or more of the following techniques:
[0055] Batch normalization: The values generated by individual layers of the modified SSD model can be normalized to avoid internal covariate shift. Batch normalization can improve the efficiency of the modified SSD model.
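A minimal sketch of per-channel batch normalization as just described (the learned scale and shift names follow common convention; the disclosure does not specify them):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize a batch of activations shaped (N, H, W, C) per channel,
    then apply a learned per-channel scale (gamma) and shift (beta),
    both shaped (C,)."""
    mean = x.mean(axis=(0, 1, 2), keepdims=True)
    var = x.var(axis=(0, 1, 2), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```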
[0056] Leaky rectified linear units: A leaky rectified linear unit is activated when the input to the unit is a positive value. If the input is not a positive value, the unit outputs a value equal to the input value multiplied by a value between 0 and 1. If the input is a positive value, the unit outputs a value equal to the input value.
[0057] Residual neural networks: A residual neural network generates output values that are the sum of some function value and an input value. Thus, the residual neural network generates output values incremental to the input values. Residual neural networks can improve the efficiency and accuracy of the modified SSD model.
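As minimal sketches of the two units just described (paragraphs [0056] and [0057]); the leak slope value is an assumption:

```python
import numpy as np

def leaky_relu(x: np.ndarray, slope: float = 0.1) -> np.ndarray:
    """Pass positive inputs through unchanged; multiply negative inputs
    by `slope`, a value between 0 and 1, per paragraph [0056]."""
    return np.where(x > 0, x, slope * x)

def residual_block(x: np.ndarray, f) -> np.ndarray:
    """Residual connection: the output is the input plus some learned
    function of the input, per paragraph [0057]."""
    return x + f(x)
```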
[0058] Custom anchor boxes: The default boxes used by the SSD model can be adjusted to more effectively detect plants and spray patterns of the sizes expected at a given PPI in the processed images. For example, the default boxes can be adjusted by reducing their size, and higher-resolution default boxes can be applied. By customizing the default boxes, the accuracy of the modified SSD model can be improved, especially for the identification of small objects.
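The disclosure does not give the anchor configuration; a minimal sketch of generating smaller-than-baseline default boxes (the scales and aspect ratios below are hypothetical values, not the patented ones):

```python
def default_boxes(feature_map_size: int, scales, aspect_ratios):
    """Generate (cx, cy, w, h) default boxes, in fractions of the image,
    at every cell of a square feature map. Shrinking `scales` biases
    the detector toward the small plants typical of early-season fields."""
    boxes = []
    for i in range(feature_map_size):
        for j in range(feature_map_size):
            cx = (j + 0.5) / feature_map_size
            cy = (i + 0.5) / feature_map_size
            for s in scales:
                for ar in aspect_ratios:
                    boxes.append((cx, cy, s * ar ** 0.5, s / ar ** 0.5))
    return boxes

# Hypothetical: smaller scales than the baseline SSD defaults.
anchors = default_boxes(38, scales=[0.02, 0.05], aspect_ratios=[1.0, 2.0, 0.5])
```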
[0059] Clean labeled data: Extraneous data can be removed from the training data to make training more effective. For example, the labeled image data can be cleaned or improved by having multiple human operators label the image data. Cleaning the labeled image data can improve the accuracy of the modified SSD model.
[0060] Higher spatial resolution in feature maps: The amount of downsampling performed on the image data or on the feature maps between layers of the modified SSD model may be less than the downsampling performed in the baseline SSD model, thus increasing the resolution, and therefore the accuracy, of the modified SSD model's feature maps.
[0061] Spatial transformers: The dimensions of the image data or feature maps can be scaled between layers of the neural network.
[0062] Training loss optimization: Many models are trained to reduce the value of a loss function. The loss function used in the modified SSD model may differ from that of the baseline. Training to optimize a loss function can improve the accuracy of the modified SSD model.
[0063] Weighted softmax: Each object class can be assigned a weight to balance class imbalances among the objects detected in the image data. The weighted softmax can then be used to identify the objects in the image data more accurately.
[0064] Feature map fusion: In some embodiments, the modified SSD model uses lower-resolution feature maps along with higher-resolution feature maps to identify smaller objects in the images. As described above, the deeper layers of the modified SSD model generate lower-resolution feature maps that are used to identify larger objects in the images analyzed by the modified SSD model. These lower-resolution feature maps also include features that describe larger parts of the image than the higher-resolution feature maps generated by shallower layers of the modified SSD model. The modified SSD model can be structured so that the lower-resolution feature maps generated by deeper layers of the modified SSD model are combined with the higher-resolution feature maps to more effectively identify small objects in shallower layers of the modified SSD model. In some embodiments, the lower-resolution feature maps are combined with the higher-resolution feature maps by concatenating the lower-resolution feature maps with the higher-resolution feature maps before they are processed by the convolutional neural network.
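A minimal sketch of the upsample-and-concatenate fusion just described (nearest-neighbor upsampling is an assumption; the disclosure does not name the interpolation, and this sketch assumes the resolutions divide evenly):

```python
import numpy as np

def fuse_feature_maps(low_res: np.ndarray, high_res: np.ndarray) -> np.ndarray:
    """Nearest-neighbor upsample a low-resolution feature map (h, w, c1)
    to match a higher-resolution map (H, W, c2), then concatenate along
    the channel axis, per paragraph [0064]. Requires H % h == 0, W % w == 0."""
    H, W = high_res.shape[:2]
    h, w = low_res.shape[:2]
    rows = np.repeat(np.arange(h), H // h)
    cols = np.repeat(np.arange(w), W // w)
    upsampled = low_res[rows][:, cols]
    return np.concatenate([high_res, upsampled], axis=-1)
```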
[0065] Background mining: To improve the performance of the modified SSD model during its operation, the modified SSD model can be trained so that the neurons in the modified SSD model's neural networks are trained to identify background objects in images with only a threshold level of accuracy. Generally, the images used to train the modified SSD model have more background regions for training than foreground regions, due to the proportional division of soil and plants in the images. For example, many images of plants in the early stages of growth, during which it would be interesting to obtain images, will contain approximately 10% plant material by surface area coverage versus 90% surrounding soil. If the model is trained without taking this imbalance into account, the neurons in the modified SSD model's network will be excessively trained to accurately identify background objects (for example, soil) at the expense of being able to sufficiently identify foreground objects (for example, plants).
[0066] To improve the performance of the modified SSD model in recognizing foreground objects, the neurons of the neural networks can be trained using a subset of the background objects available in the training images. When the neurons of the modified SSD model are trained on a new background object, neurons with a background object identification accuracy above a threshold value are not trained on the new background object. In other words, if a neuron can identify background objects with an accuracy that exceeds a threshold value, the neuron is not trained on new background objects from labeled training images. These neurons can continue to be trained on new foreground objects from labeled training images, thereby improving the overall ability of the modified SSD model to identify foreground objects.
[0067] In addition, some parts of the training images can be labeled as foreground objects or background objects with different confidence levels by the feature maps of the modified SSD model prior to input into the CNN of the modified SSD model. For example, the center of a plant can be labeled as a foreground object with a high confidence level, while the boundary area between the edge of a leaf and the background can be labeled as a foreground object with a low confidence level. The modified SSD model can be trained only on training image objects labeled with high confidence as foreground or background objects. The modified SSD model is thus trained using more accurate representations of foreground objects and background objects, which improves the ability of the modified SSD model to identify and distinguish foreground objects and background objects in the images.
[0068] Training augmentations: Additional training images for the modified SSD model can be generated by augmenting existing training images using image augmentations that replicate real-world phenomena. Real-world phenomena can affect images captured by the mobile treatment platform during operation. For example, the color temperature of images can be affected by the time of day when the images are captured. However, due to the expense of operating mobile treatment platforms in crop fields to capture training images, it may not be possible to collect enough training images to replicate all possible values of the various real-world phenomena that may actually be experienced during operation of the mobile treatment platform in the field. Training the modified SSD model using only images representing the captured environmental conditions limits the ability of the modified SSD model to identify objects in conditions that do not correspond to those in which the training images were captured.
[0069] The training images used to train the modified SSD model can be augmented (and, in some cases, duplicated and then the copies augmented) to replicate real-world phenomena that can impact images captured by the mobile treatment platform during operation. For example, one or more of the following augmentations can be applied to labeled training images:
[0070] Color temperature: the color temperature of training images can be adjusted to reproduce differences in color temperature that can occur at different times of day (for example, sunrise, sunset, or midday) or with different lighting conditions (for example, sunlight or artificial lights);
[0071] Two-dimensional blur: a two-dimensional blur can be applied to training images to replicate the blur caused by a change in the camera's distance from the ground;
[0072] One-dimensional blur: a one-dimensional blur can be applied to training images to replicate the blur caused by the movement of the mobile treatment platform;
[0073] Gain: the gain of training images can be adjusted to replicate the over- or under-exposure that can occur when the mobile treatment platform captures images;
[0074] Noise: noise can be applied to training images to replicate conditions that may affect the quality of images captured by the camera (for example, dirty lenses, fog or mist, or low-quality cameras);
[0075] Rotation/flipping: training images can be rotated or flipped to replicate changes in the direction of travel of the mobile treatment platform or in the orientation of the camera on the mobile treatment platform;
[0076] Pixel jitter: the objects identified in the training images can be shifted slightly (for example, by a few pixels) to ensure that the predictions made by the modified SSD model are independent of the absolute location of the plants in the images.
[0077] Sets of new training images can be generated by applying the augmentations to the original training images. In some embodiments, each set of new training images may correspond to one or more of the augmentations applied to the original training images to generate that set of new training images. When the modified SSD model is trained on the augmented training images, the modified SSD model is better able to detect objects in images captured under the conditions of the augmented images.
[0078] In some embodiments, the augmentations are applied to the original training images using augmentation parameters. The augmentation parameters configure how an augmentation is applied to the training images. For example, for the gain augmentation, the degree to which the gain is adjusted can be specified by gain augmentation parameters. Listed below is a non-exhaustive example of training augmentation parameter ranges that can be used for augmentations applied to the original training images:
Color temperature: 2000 K - 9500 K
1D or 2D blur: 1 pixel - 15 pixels
Gain: -14 dB - 5 dB
Noise: 0.00 - 0.28 noise level
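As an illustrative sketch of sampling two of these augmentations from the ranges above (interpreting the noise level as a fraction of full scale is an assumption; the disclosure does not define its units):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_gain(image: np.ndarray) -> np.ndarray:
    """Apply a random gain drawn from the -14 dB to 5 dB range above."""
    gain_db = rng.uniform(-14.0, 5.0)
    factor = 10.0 ** (gain_db / 20.0)
    return np.clip(image.astype(np.float32) * factor, 0, 255).astype(np.uint8)

def augment_noise(image: np.ndarray) -> np.ndarray:
    """Add Gaussian noise with a level drawn from the 0.00-0.28 range,
    interpreted here as a fraction of full scale (an assumption)."""
    level = rng.uniform(0.0, 0.28)
    noise = rng.normal(0.0, level * 255.0, size=image.shape)
    return np.clip(image.astype(np.float32) + noise, 0, 255).astype(np.uint8)
```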
[0079] An augmentation can be applied to the original training images more than once with different augmentation parameters to generate multiple sets of new training images. In some embodiments, a predetermined range of augmentation parameters is used across the multiple applications of an augmentation. The predetermined range of augmentation parameters can be a range that minimizes the likelihood that the performance of the modified SSD model will decrease due to over-training. FIG. 5 illustrates two graphs of the performance of a modified SSD model for different training augmentation parameters for a gain augmentation, according to some embodiments. More specifically, FIG. 5 illustrates the recall and precision performance of a modified SSD model with different gain augmentation parameters. The gain augmentation parameters can be limited to a predetermined range 500 of gain augmentation parameters that ensures that the modified SSD model maintains a sufficient level of performance while ensuring that the modified SSD model is trained to identify crops and weeds in images affected by real-world phenomena. FIG. 6 likewise illustrates two graphs of the performance of a modified SSD model for different training augmentation parameters for a noise augmentation, and illustrates a predetermined range 600 of augmentation parameters that can be used to generate new training images.
[0080] Uncertainty-based retraining: The modified SSD model can be retrained based on the uncertainty of the results generated by the modified SSD model. For example, in some embodiments, a subset of the neurons used by the modified SSD model can be selected to be dropped from the identification of objects in an image. Dropped neurons are not used by the modified SSD model to identify objects in the image. Objects identified by the modified SSD model without using the selected neurons can be compared to objects identified by the modified SSD model using the selected neurons to determine whether the modified SSD model has been sufficiently trained to identify the objects. In some cases, the uncertainties for the objects identified with and without the selected neurons can be compared to determine whether the modified SSD model has been sufficiently trained to identify the objects. If the identified objects or uncertainties generated by the modified SSD model without the selected neurons differ significantly (for example, by more than a threshold difference) from the identified objects or uncertainties generated by the modified SSD model with the selected neurons, the image for which the objects were identified can be used to further train the modified SSD model. In some embodiments, the images used to further train the modified SSD model are passed through a labeling process by which the images are labeled for training (for example, the images can be transmitted to a human labeler for manual labeling).
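As an illustrative sketch of this selection step, treating the with-neurons and without-neurons predictions as opaque callables and using a mean absolute difference as the disagreement metric (both the callables and the metric are assumptions; the disclosure leaves them open):

```python
import numpy as np

def needs_relabeling(predict_full, predict_dropped, image, threshold: float) -> bool:
    """Flag an image for the labeling pipeline when detections made with
    and without the dropped subset of neurons disagree by more than a
    threshold. `predict_*` are assumed to return comparable score arrays."""
    full = np.asarray(predict_full(image), dtype=np.float64)
    dropped = np.asarray(predict_dropped(image), dtype=np.float64)
    disagreement = np.abs(full - dropped).mean()
    return disagreement > threshold
```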
[0081] Alternatively or additionally, the identified objects or uncertainties generated by the modified SSD model can be compared with identified objects or uncertainties generated by a teacher model. The teacher model is a model that can more accurately identify the objects in the images captured by the mobile treatment platform, although the teacher model can be more computationally intensive than the modified SSD model. If the identified objects or uncertainties generated by the modified SSD model differ significantly from the identified objects or uncertainties generated by the teacher model, the image for which the objects were identified can be used to further train the modified SSD model.
II.D.3. Performance of exemplary modified SSD models
[0082] Several of these techniques and tools can be employed to improve the modified SSD model over the standard SSD model. The techniques used in a platform implementation can vary based on the desired execution time, the desired driving speed of the platform 100 across the field, the desired precision in generating bounding boxes, and so on. Thus, the specific design of the plant detection model 180 and, in one set of embodiments, the modified SSD model, may vary according to the desired implementation. Table 1 illustrates examples of modified SSD models, along with a comparison of their precision against the baseline SSD model when the models are run on a Jetson TX2:
Table 1
Techniques | Precision | Runtime
Baseline SSD | 42% | 600 ms
SSD, batch normalization, leaky ReLU, and residual networks | 57% | 190 ms
SSD, batch normalization, leaky ReLU, residual networks, custom anchor boxes, clean labeled data, and increased spatial resolution in feature maps | 77% | 175 ms
SSD, batch normalization, leaky ReLU, custom anchor boxes, clean labeled data, increased spatial resolution in feature maps, and training loss optimization | 85% | <150 ms
[0083] Thus, various combinations of these techniques can improve on the baseline SSD model.
[0084] FIGS. 7A and 7B illustrate this improvement in the detection of weeds 700 around crops 710 treated by the mobile treatment platform, according to some embodiments. FIG. 7B uses a conventional SSD neural network model to detect crops 710 and weeds 700. FIG. 7A uses an implementation of the modified SSD model to accomplish the same task. The modified SSD model identifies additional weeds 720 that are not identified by the conventional neural network model.
[0085] FIGS. 7C and 7D illustrate similar improvements. Embodiments of the modified SSD model identify additional crop plants 730 and weeds 740 in FIG. 7C beyond those identified 750 by the baseline SSD model in FIG. 7D. In addition, FIG. 7F illustrates additional weeds that are identified by bounding boxes 760 generated by the modified SSD model that are not generated by the baseline SSD model, as illustrated in FIG. 7E.
[0086] Additional testing of some embodiments of the modified SSD model illustrated the improvements over conventional SSD models that can be achieved by improving the training of the modified SSD model as described here (for example, training augmentation). Tests of the modified SSD model use images of plants in a field with a variety of different parameters, such as different crop growth stages, weed pressures, soil conditions, cultivated and uncultivated soil, time of day, and health conditions of the plants. These parameters ensure that the test images cover a wide range of scenarios that the mobile treatment platform can encounter in the field. An embodiment of the modified SSD model and a conventional SSD model identify crops and weeds in the test image set, and the objects identified by the modified SSD model and the conventional SSD model are compared. It was found that the modified SSD model shows a 20% improvement in weed identification performance and a 15% improvement in crop identification performance.
[0087] More generally, although the modified SSD models described above use several layers of downsampling of input images/feature maps to separately identify bounding boxes of various sizes, using neural networks specific to each of these layers, in practice this multilayer identification can be applied using a technique other than a neural network at each layer. For example, several downsampled versions of the original input image, or feature maps created based on the original image, can be input into other types of machine learning models. Downsampling between layers preserves time efficiency, and each layer is applied to a model trained to process images at that level of downsampling, preserving accuracy.
II.F. Spray box detection
[0088] As mentioned above, the plant detection model 180 can be configured to perform the task of identifying bounding boxes for portions of soil that have already been treated by the treatment mechanism 110. For clarity, this model can be referred to as a spray box detection model rather than as a plant detection model 180. However, in implementation, it is broadly similar to the plant detection model 180 in general functionality. The significant difference between the two is what is to be detected in the image data. Any of the techniques described above for improving the baseline SSD model for detecting bounding boxes can also be used to fit a similar spray box detection model, and a similar principle applies, in which an implementer can choose which specific techniques to apply based on a desired level of sensitivity, specificity, and efficiency. FIGS. 8A, 8B and 8C illustrate an example implementation of the model that identifies bounding boxes 800 for treated soil patches on dark soil, on light soil, and with light leakage under a cover used to normalize the light, respectively. In one example embodiment, the model is specifically designed to more heavily penalize a localization error.
III. Examples of uses
III.A. Treatment instructions
[0089] The treatment application module 190 provides instructions to the treatment mechanism 110 to treat plants in the field based on the output of the plant detection model 180. The treatment application module 190 can provide instructions to the treatment mechanism 110 to activate or deactivate the treatment mechanism 110. The treatment application module 190 can also provide instructions directing where the treatment mechanism 110 applies treatment (for example, instructions to rotate or tilt the treatment mechanism 110).
[0090] The treatment application module 190 uses the bounding boxes generated by the plant detection model 180 to treat the plants identified by the bounding boxes. For example, the treatment application module 190 may provide instructions to treatment mechanism 110 to apply fertilizer to areas identified by bounding boxes as representing crops. As another example, the treatment application module 190 may provide instructions to treatment mechanism 110 to apply a herbicide to areas identified by bounding boxes as representing weeds. The treatment application module 190 may also apply treatment to plants based on bounding boxes that identify where treatment had previously been applied.
III.B. Generate bounding boxes for plants
[0091] FIG. 9 is a flow chart illustrating a method for identifying plant bounding boxes, according to an embodiment. The mobile treatment platform receives 900 image data from a camera on the mobile treatment platform. The camera can capture image data of crops as the mobile treatment platform travels through a crop field. The mobile treatment platform applies 910 pre-processing steps to the received image data. The mobile treatment platform detects 920 bounding boxes using a plant detection model. The bounding boxes identify parts of the image data that represent plants. The bounding boxes can also identify the species of the plant, as well as the confidence that the bounding boxes are accurate. The mobile treatment platform applies 930 treatment to the plants in the field based on the bounding boxes. For example, the mobile treatment platform can use the bounding boxes to apply fertilizer to crops in the field and apply herbicide to weeds in the field as the mobile treatment platform travels through the field.
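A compact sketch of this loop follows, with hypothetical camera, model and mechanism objects standing in for the camera, the plant detection model 180 and the treatment mechanism 110; the preprocessing shown (resizing and value normalization) is a representative subset of the functions described earlier:

```python
# Minimal sketch of the FIG. 9 loop: receive (900), preprocess (910),
# detect (920), treat (930).
import numpy as np

def preprocess(image, target_size=(300, 300)):
    # Nearest-neighbor resize via index sampling, then scale values to [0, 1].
    h, w = image.shape[:2]
    rows = np.arange(target_size[0]) * h // target_size[0]
    cols = np.arange(target_size[1]) * w // target_size[1]
    resized = image[rows][:, cols]
    return resized.astype(np.float32) / 255.0

def run_platform(camera, model, mechanism):
    while camera.has_frames():                 # 900: receive image data
        frame = camera.next_frame()
        prepared = preprocess(frame)           # 910: pre-processing steps
        boxes = model.detect(prepared)         # 920: plant bounding boxes
        for box in boxes:                      # 930: apply treatment
            mechanism.treat(box)
```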
III.C. Generate bounding boxes for treated areas
[0092] FIG. 10 is a flow chart illustrating a method for identifying bounding boxes for treated areas, according to some embodiments. The mobile treatment platform receives 1000 image data from a camera on the mobile treatment platform. The camera can capture image data of crops as the mobile treatment platform travels through a crop field. The mobile treatment platform applies 1010 pre-processing steps to the received image data. The mobile treatment platform detects 1020 bounding boxes using a spray box detection model. The mobile treatment platform uses the spray box detection model to generate bounding boxes that identify parts of the image data that represent treated areas. The bounding boxes can also specify the confidence that the bounding boxes are accurate. The mobile treatment platform applies 1030 treatment to the plants in the field based on the bounding boxes.
[0093] In one embodiment, the bounding boxes detected by the spray box detection model are used in conjunction with the bounding boxes detected by the plant detection model. The mobile treatment platform can initially identify a set of bounding boxes to apply a treatment to using the plant detection model. The mobile treatment platform can then use the bounding boxes detected by the spray box detection model to avoid applying treatment to an area that has already been treated. This can be achieved by removing from the treatment plan areas of the field that, although inside a bounding box detected by the plant detection model, are also inside a bounding box detected by the spray box detection model.
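One way to implement this removal, sketched below under the simplifying assumption that a plant box counts as "inside" a spray box when its center falls within that box, is a straightforward containment test:

```python
# Minimal sketch: keep only plant boxes whose centers lie outside every
# spray (already-treated) box detected by the spray box detection model.
def center(box):
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2, (y0 + y1) / 2)

def contains(outer, point):
    x0, y0, x1, y1 = outer
    px, py = point
    return x0 <= px <= x1 and y0 <= py <= y1

def untreated_targets(plant_boxes, spray_boxes):
    """Filter out plant boxes that fall inside any already-treated area."""
    return [
        p for p in plant_boxes
        if not any(contains(s, center(p)) for s in spray_boxes)
    ]

# Example: the second plant sits inside an already-treated patch.
plants = [(0, 0, 2, 2), (5, 5, 7, 7)]
sprayed = [(4, 4, 8, 8)]
print(untreated_targets(plants, sprayed))  # [(0, 0, 2, 2)]
```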
IV. Additional considerations
[0094] Although the disclosure described here mainly describes a modified SSD model, the principles and modifications contained herein can be applied to other bounding box models, such as R-CNN or YOLO. Any of the steps, operations or processes described here can be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In some embodiments, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor to perform any or all of the steps, operations or processes described.
[0095] Computer 140 discussed above may be specially built for the required purposes and/or may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored on the computer. This computer program can be stored on a computer-readable, non-transitory and tangible storage medium, or on any type of media suitable for storing electronic instructions, which can be coupled to a computer system bus. In addition, any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing power. Embodiments can also refer to a product that is produced by a computing process described here. This product may comprise information resulting from a computing process, where the information is stored on a computer-readable, non-transitory and tangible storage medium and may include any embodiment of a computer program product or other combination of data described herein.
[0096] Finally, the language used in the specification was selected mainly for purposes of readability and instruction, and may not have been selected to outline or circumscribe the inventive subject matter. Therefore, it is intended that the scope of patent rights be limited not by this detailed description, but by any claims that issue in an application based on this document. Accordingly, the disclosure of the embodiments is intended to be illustrative, but not limiting, of the scope of patent rights, which are set out at least in part in the following claims.
Claims (14)
1. Method, characterized by the fact that it comprises:
- capture an image from a camera mounted on a plant treatment platform that passes through or over a field of crops, with the image comprising data representing one or more plants;
- apply, with a computer physically attached to the plant treatment platform, one or more pre-processing functions to the image data, with the pre-processing functions preparing the image data for processing by a plant detection model;
- insert, with the computer, the preprocessed image data into the plant detection model to generate one or more plant bounding boxes, with the plant bounding boxes outlining parts of the pre-processed image data representing one or more plants; and
- apply, with a treatment mechanism mounted on the plant treatment platform, a treatment to the one or more plants based on the generated plant bounding boxes.
2. Method, according to claim 1, characterized by the fact that the camera is positioned above one or more plants and is directed downwards.
3. Method, according to claim 1, characterized by the fact that the camera is positioned next to one or more plants and is directed towards one or more plants.
4. Method, according to claim 1, characterized by the fact that the camera is positioned above the one or more plants and is tilted towards a direction of travel of the plant treatment platform.
5. Method, according to claim 1, characterized by the fact that the plant detection model comprises one or more submodels, each submodel identifying a different plant species, and in which each of the one or more bounding boxes is generated using one of the one or more submodels and comprises an identifier for the plant species captured by the bounding box.
6. Method, according to claim 1, characterized by the fact that each of the bounding boxes comprises a confidence measure that represents a confidence that the bounding box captures one of the one or more plants.
7. Method, according to claim 1, characterized by the fact that the plant detection model comprises a modified version of a Single Shot MultiBox Detector (SSD) model.
8. Method, according to claim 7, characterized by the fact that the plant detection model uses at least one of the following techniques: batch normalization, leaky rectified linear units, residual neural networks, custom anchor boxes, clean labeled data, increased spatial resolution in feature maps, spatial transformers, training loss optimization, or weighted softmax.
9. Method, according to claim 1, characterized by the fact that the pre-processing functions comprise at least one of applying a Bayer algorithm, cropping, white balance, resizing, exposure control or value normalization.
10. Method, according to claim 9, characterized by the fact that the pre-processing functions include value normalization to a fixed PPI.
11. Method according to claim 1, characterized in that the mobile treatment platform also comprises a transport mechanism that allows the mobile treatment platform to travel through the crop field.
12. Method, according to claim 1, characterized by the fact that it further comprises: transmitting instructions to a treatment mechanism to treat the one or more plants based on the generated plant bounding boxes.
13. Method, according to claim 1, characterized by the fact that the plant detection model is trained based on labeled image data.
14. Computer-readable medium, characterized by the fact that it comprises instructions that, when executed by a processor, cause the processor to:
- capture an image from a camera mounted on a plant treatment platform that passes through or over a field of crops, with the image comprising data representing one or more plants;
- apply, with a computer physically attached to the plant treatment platform, one or more pre-processing functions to the image data, with the pre-processing functions preparing the image data for processing by a plant detection model;
- insert, with the computer, the preprocessed image data into the plant detection model to generate one or more plant bounding boxes, with the plant bounding boxes delineating parts of the pre-processed image data representing one or more plants; and
- apply, with a treatment mechanism mounted on the plant treatment platform, a treatment to the one or more plants based on the generated plant bounding boxes.